Regulating AI
Regulating AI Is Easier Than You Think
Artificial intelligence is poised to deliver tremendous benefits to society. But, as many have pointed out, it could also bring unprecedented new horrors. As a general-purpose technology, the same tools that will advance scientific discovery could also be used to develop cyber, chemical, or biological weapons. Governing AI will require widely sharing its benefits while keeping the most powerful AI out of the hands of bad actors. The good news is that there is already a template for how to do just that.
- Law > Statutes (0.85)
- Government > Military (0.70)
- Energy > Power Industry > Utilities > Nuclear (0.49)
What We Can Learn About Regulating AI from the Military
In a bustling restaurant in downtown Anytown, USA, an overwhelmed manager turns to AI to help with staff shortages and customer service. Across town, a harried newspaper publisher leverages AI to help generate news content. Both are part of a growing number of businesses that rely on AI for everyday needs. But what happens when the technology errs or, worse, poses risks we haven't fully considered? The current policy conversation is heavily geared toward the eight or so powerful companies that make AI.
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (0.71)
- Government > Military (0.48)
Regulating AI is going to be hard, but big tech transparency is key
It is increasingly obvious that we are on the cusp of a revolution in artificial intelligence that will be no less profound than the arrival of the printing press or the internet, as we explore in this special issue. Nobody can say for sure exactly what this future will be, but optimists – including many of those working in the companies behind the technologies – foresee one in which AI will allow us to live our best lives (see "How this moment for AI will change society forever").
- Law (0.40)
- Government (0.40)
Regulating AI: 3 experts explain why it's difficult to do and important to get right
From fake photos of Donald Trump being arrested by New York City police officers to a chatbot describing a very-much-alive computer scientist as having died tragically, the ability of the new generation of generative artificial intelligence systems to create convincing but fictional text and images is setting off alarms about fraud and misinformation on steroids. Indeed, a group of artificial intelligence researchers and industry figures urged the industry on March 29, 2023, to pause further training of the latest AI technologies or, barring that, for governments to "impose a moratorium." These technologies – image generators like DALL-E, Midjourney and Stable Diffusion, and text generators like Bard, ChatGPT, Chinchilla and LLaMA – are now available to millions of people and don't require technical knowledge to use. Given the potential for widespread harm as technology companies roll out these AI systems and test them on the public, policymakers are faced with the task of determining whether and how to regulate the emerging technology. The Conversation asked three experts on technology policy to explain why regulating AI is such a challenge – and why it's so important to get it right.
- North America > United States > New York (0.25)
- North America > United States > California > Los Angeles County > Los Angeles (0.15)
- North America > United States > Texas (0.05)
- Law > Statutes (0.96)
- Government > Regional Government > North America Government > United States Government (0.35)
Regulating AI: Will It Be Enough to Keep Us Safe from Its Dangers?
Artificial intelligence (AI) has been making a lot of headlines lately, and with good reason. AI is quickly becoming more sophisticated, allowing it to be used in a variety of ways that can benefit businesses and individuals alike. But while the potential benefits are clear, there is also concern about how these powerful technologies may be misused or abused by those who don't have our best interests at heart. As such, many governments around the world are looking into various forms of regulation for AI as they seek to protect their citizens from the negative consequences that could arise from its misuse. However, despite this increased focus on regulating AI for safety, one thing remains unclear: how effective will this form of regulation really be?
- North America > United States (0.17)
- Europe (0.17)
- Government (1.00)
- Law > Statutes (0.78)
- Information Technology > Security & Privacy (0.52)
One of the Biggest Problems in Regulating AI Is Agreeing on a Definition
In 2017, spurred by advocacy from civil society groups, the New York City Council created a task force to address the city's growing use of artificial intelligence. But the task force quickly ran aground attempting to come to a consensus on the scope of "automated decision systems." In one hearing, a city agency argued that the task force's definition was so expansive that it might include simple calculations such as formulas in spreadsheets. By the end of its eighteen-month term, the task force's ambitions had narrowed from addressing how the city uses automated decision systems to simply defining the types of systems that should be subject to oversight. As policymakers around the world have attempted to create guidance and regulation for AI's use in settings ranging from school admissions and home loan approvals to military weapon targeting systems, they all face the same problem: AI is really challenging to define.
- Law > Statutes (0.71)
- Government > Regional Government > North America Government > United States Government (0.69)
Regulating AI: What marketers need to know
In June, the Canadian government proposed new legislation to regulate artificial intelligence (AI). The proposed Artificial Intelligence and Data Act (AIDA) is part of Bill C-27, which also proposes a new privacy framework, the Consumer Privacy Protection Act (for more on federal privacy reform, see our recent blog). If passed, AIDA would be the first comprehensive law in Canada regulating AI. AIDA intends to promote the responsible use of AI. It aims to ensure that high-impact AI systems are developed in a way that mitigates the risk of harm and bias.
- Government (1.00)
- Law > Statutes (0.53)
Regulating AI Through Data Privacy
In the absence of a national data privacy law in the U.S., California has been more active than any other state in efforts to fill the gap at the state level. The state enacted one of the nation's first data privacy laws, the California Privacy Rights Act (Proposition 24), in 2020, and an additional law will take effect in 2023. A new state agency created by the law, the California Privacy Protection Agency, recently issued an invitation for public comment on the many open questions surrounding the law's implementation. Our team of Stanford researchers, graduate students, and undergraduates examined the proposed law and concluded that data privacy can be a useful tool in regulating AI, but that California's new law must be more narrowly tailored to prevent overreach, focus more on AI model transparency, and ensure people's rights to delete their personal information are not usurped by the use of AI.

Additionally, we suggest that the regulation's proposed transparency provision requiring companies to explain to consumers the logic underlying their "automated decision making" processes could be more powerful if it instead focused on providing greater transparency about the data used to enable such processes. Finally, we argue that the data embedded in machine-learning models must be explicitly included when considering consumers' rights to delete, know, and correct their data.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
OECD Offers Policy Advice for Regulating AI in Financial Services
Among the recommendations are the introduction of suitability requirements for AI-driven financial services and add-on capital buffers based on AI algorithms. The OECD has published a new report offering policy recommendations to ensure the use of artificial intelligence (AI), machine learning (ML) and big data in finance is consistent with financial stability, consumer protection, and market integrity and competition objectives. While noting that AI can drive competitive advantages for financial firms, improve their efficiency, and enhance services for consumers, the report says AI applications in finance may create or intensify financial and non-financial risks, and give rise to potential financial consumer and investor protection concerns around the fairness of consumer outcomes, data management and data usage. Emerging risks from the deployment of AI techniques need to be identified and mitigated to support and promote responsible AI, the report says, and existing regulatory and supervisory requirements may need to be clarified and adjusted to address incompatibilities between existing arrangements and AI applications. In particular, policymakers should consider sharpening their focus on better data governance by financial-sector firms to reinforce consumer protection across AI applications in finance, and address risks related to data privacy, confidentiality, concentration of data, and unintended bias and discrimination.
- Banking & Finance > Financial Services (0.77)
- Information Technology > Security & Privacy (0.75)
- Banking & Finance > Trading (0.73)
AI Summit 2020: Regulating AI for the common good
Artificial intelligence requires carefully considered regulation to ensure technologies balance cooperation and competition for the greater good, according to expert speakers at the AI Summit 2020. As a general-purpose technology, artificial intelligence (AI) can be used in a staggering array of contexts, with many advocates framing its rapid development as a cooperative endeavour for the benefit of all humanity. The United Nations, for example, launched its AI for Good initiative in 2017, while the French and Chinese governments talk of "AI for Humanity" and "AI for the benefit of mankind" respectively – rhetoric echoed by many other governments and supra-national bodies across the world. On the other hand, these same advocates also use language and rhetoric that emphasises the competitive advantages AI could bring in the more narrow pursuit of national interest. "Just as in international politics, there's a tension between an agreed aspiration to build AI for humanity, and for the common good, and the more selfish and narrow drive to compete to have advantage," said Allan Dafoe, director of the Centre for the Governance of AI at Oxford University, speaking at the AI Summit, which took place online this week.
- Asia > China (0.46)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.25)